Conversation
```python
# TODO(msimberg): What should we do about this. (The global) num_cells is
# not guaranteed to be set here when used through fortran. Should we:
# 1. Ignore distributed?
# 2. Compute num_cells with a reduction?
# 3. Use a ProcessProperties to detect it?
distributed = (
    config.num_cells < global_properties.num_cells
    if global_properties.num_cells is not None
    else False
)
limited_area_or_distributed = config.limited_area or distributed
```
This would be good to resolve before this PR is merged.
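Not part of the original thread, but as a rough sketch of option 2 (computing num_cells with a reduction), assuming an mpi4py communicator is reachable here, e.g. through a ProcessProperties-like object; the function and parameter names are illustrative, not the actual icon4py API:

```python
# Hypothetical sketch of option 2: derive the global cell count via a reduction.
from mpi4py import MPI

def global_num_cells(local_num_owned_cells: int, comm: MPI.Comm = MPI.COMM_WORLD) -> int:
    # Sum the locally owned cells over all ranks; on a single rank this just
    # returns the local count, so the "distributed" comparison becomes False.
    return comm.allreduce(local_num_owned_cells, op=MPI.SUM)

# distributed = config.num_cells < global_num_cells(config.num_cells)
```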
```python
# TODO(msimberg): Is halo always expected to be populated?
global_indices_local_field = decomposition_info.global_index(
    dim,
    decomp_defs.DecompositionInfo.EntryType.OWNED,  # ALL if checking halos
)
local_indices_local_field = decomposition_info.local_index(
    dim,
    decomp_defs.DecompositionInfo.EntryType.OWNED,  # ALL if checking halos
)
```
This should still be fixed. I'm thinking of adding a check_halos parameter to the test: for fields that have the halo populated, check that the halo has been correctly populated.
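A rough sketch of that idea; the parameter name, fixture names, and test body are assumptions rather than existing code (decomp_defs stands for the decomposition definitions module already imported in the test file):

```python
import pytest

# Hypothetical sketch: switch between OWNED and ALL entries depending on
# whether the field under test is expected to have a populated halo.
@pytest.mark.parametrize("check_halos", [False, True])
def test_field_against_global_reference(decomposition_info, dim, check_halos):
    entry_type = (
        decomp_defs.DecompositionInfo.EntryType.ALL
        if check_halos
        else decomp_defs.DecompositionInfo.EntryType.OWNED
    )
    global_indices_local_field = decomposition_info.global_index(dim, entry_type)
    local_indices_local_field = decomposition_info.local_index(dim, entry_type)
    # ... compare the local field at local_indices_local_field against the
    # global reference field at global_indices_local_field ...
```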
```python
# TODO(msimberg): Is this true? Not true for RBF interpolation... why?
# We expect an exact match, since the starting point is the same (grid
# file) and we are doing the exact same computations in single rank and
# multi rank mode.
np.testing.assert_allclose(sorted_, global_reference_field, atol=1e-9, verbose=True)
```
Fix or remove the TODO before merging.
My first guess is that neighbors may be differently ordered on the local patch compared to the global grid, but can that actually happen? If it can, I can see the Cholesky decomposition leading to different results. If not, I would expect exact results.
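Purely to illustrate the concern (not taken from the PR): solving the same symmetric positive-definite system with the neighbors enumerated in a different order changes the floating-point operation order, which typically perturbs the result at the 1e-16 to 1e-15 level, enough to break an exact comparison while still passing an atol=1e-9 check.

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(42)
a = rng.standard_normal((6, 6))
spd = a @ a.T + 6 * np.eye(6)   # stand-in for an SPD interpolation matrix
rhs = rng.standard_normal(6)

# Solve once with the "global" neighbor ordering.
x = scipy.linalg.cho_solve(scipy.linalg.cho_factor(spd), rhs)

# Solve the same system with neighbors enumerated in a different (local) order.
perm = rng.permutation(6)
x_perm = scipy.linalg.cho_solve(
    scipy.linalg.cho_factor(spd[np.ix_(perm, perm)]), rhs[perm]
)

# Undo the permutation: mathematically identical, but the floating-point
# results usually differ by a few ulps.
x_unpermuted = np.empty_like(x_perm)
x_unpermuted[perm] = x_perm
print(np.max(np.abs(x - x_unpermuted)))
```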
```python
    attrs_name: str,
    dim: gtx.Dimension,
) -> None:
    # TODO(msimberg): Currently segfaults. Are topography and vertical fields
```
@jcanton this is the test that segfaults. I've tried to set up a dummy vertical config and topography, but I may have misunderstood what needs to (or can) go in them. Does what I've added make sense, or do you see any obvious mistakes? With the embedded backend I get some non-segfault error messages that may be helpful for debugging. I may just have set up the vertical levels incorrectly?
It seems like this was just a matter of having a consistent num_levels. The grid managers used a default of 10 while the configs manually constructed for this test used experiment.num_levels. So the original segfault is fixed.
However, I'd still appreciate knowledgeable eyes on whether the vertical config and topography that I added make sense for this test.
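For reference, the fix boils down to feeding the same level count to both the grid manager and the manually constructed vertical config; a minimal sketch, where the helper and class names are assumptions rather than the exact icon4py API:

```python
# Hypothetical sketch: share one num_levels between the grid manager and the
# vertical configuration, instead of the manager's default of 10 vs.
# experiment.num_levels in the test config.
num_levels = experiment.num_levels

grid_manager = make_grid_manager(grid_file, num_levels=num_levels)  # assumed helper
vertical_config = VerticalGridConfig(num_levels=num_levels)         # assumed class
```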
cscs-ci run default

cscs-ci run distributed
nfarabullini left a comment
Mostly minor things for now as I understand that more work will be done.
Great job so far!!
```python
else:
    return IconLikeHaloConstructor(
        run_properties=run_properties,
        connectivities=connectivities,
        allocator=allocator,
    )
```
```diff
-else:
-    return IconLikeHaloConstructor(
-        run_properties=run_properties,
-        connectivities=connectivities,
-        allocator=allocator,
-    )
+return IconLikeHaloConstructor(
+    run_properties=run_properties,
+    connectivities=connectivities,
+    allocator=allocator,
+)
```
The else is not needed here.
I slightly prefer the explicit else, but don't feel strongly enough to oppose removing it either. Do you prefer it without the else?
Generally I do, because it looks cleaner to me, but I also don't have particularly strong feelings about it.
```python
# 3. Use a ProcessProperties to detect it?
distributed = (
    config.num_cells < global_properties.num_cells
    if global_properties.num_cells is not None
```
```diff
-    if global_properties.num_cells is not None
+    if global_properties.num_cells
```
This whole block is a TODO; ideally the None wouldn't even be possible. As with the other None, I still prefer the explicit comparison to None, because 0 is falsy (though 0 would of course not be a useful value for num_cells).
Hopefully this whole thing is removed.
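Just to spell out the distinction being discussed (illustrative only, not code from the PR):

```python
# With an explicit None check, 0 is treated as a real (if nonsensical) value;
# with a plain truthiness check, 0 and None behave the same.
num_cells = 0
print(num_cells is not None)  # True: the comparison is still performed
print(bool(num_cells))        # False: the expression would fall back to `else False`
```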
Co-authored-by: Nicoletta Farabullini <41536517+nfarabullini@users.noreply.github.com>
Mandatory Tests
Please make sure you run these tests via comment before you merge!

Optional Tests
To run benchmarks you can use:
To run tests and benchmarks with the DaCe backend you can use:
To run test levels ignored by the default test suite (mostly simple datatests for static field computations) you can use:

For more detailed information please look at CI in the EXCLAIM universe.
Decompose (global) grid file:

- Use `pymetis` to decompose the global grid (cells) into `n` patches (see the sketch below).

Omissions:

- LAM grids need to be investigated further: `start_index` and `end_index` are not in the halo construction.
- The number of halo lines (in terms of cells) is hardcoded to 2; that could be made a parameter.
- Not sure it all runs on GPU correctly... most probably there are some `numpy`/`cupy` issues to fix.
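As an illustration of the decomposition step described above (not the actual icon4py code; the connectivity array, function name, and halo handling are assumptions), a cell-based METIS partition via pymetis looks roughly like this:

```python
# Hypothetical sketch: partition the global cells into n patches with pymetis,
# using cell-to-neighbor-cell (edge-sharing) connectivity as the graph.
import numpy as np
import pymetis

def decompose_cells(c2e2c: np.ndarray, n_patches: int) -> np.ndarray:
    """Return an array of length num_cells giving the patch/rank of each cell.

    c2e2c: (num_cells, 3) cell-to-neighbor-cell connectivity from the grid
    file, with negative entries marking missing neighbors (e.g. at LAM
    boundaries).
    """
    adjacency = [row[row >= 0].tolist() for row in c2e2c]
    _, membership = pymetis.part_graph(n_patches, adjacency=adjacency)
    return np.asarray(membership)

# The halo construction (not shown) would then add a fixed number of halo
# lines of neighboring cells (currently hardcoded to 2) around each patch.
```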